Integer Functions Suitable for Homomorphic Encryption over Finite Fields
Fully Homomorphic Encryption (FHE) makes it possible to evaluate any function over encrypted data. However, despite numerous improvements over the last decade, the computational overhead of homomorphic computation remains very high. Optimizing how computations are performed homomorphically therefore remains essential. Several popular FHE schemes, such as BGV and BFV, encode their data, and thus perform their computations, in finite fields. In this work, we study and exploit algebraic relations that hold in prime characteristic to speed up the homomorphic evaluation of several functions over prime fields.
More specifically, we give several examples of unary functions (modulo, is-power-of, Hamming weight, and Mod2') whose homomorphic evaluation complexity over prime fields can be reduced below the generic bound on the number of homomorphic multiplications. Additionally, we provide a proof of a recent claim regarding the structure of the polynomial interpolation of the less-than bivariate function, confirming that this function can be evaluated with fewer homomorphic multiplications than the generic bound.
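As a flavour of the kind of algebraic relation in prime characteristic that such speed-ups rest on, Fermat's little theorem gives x^(p-1) = 1 for every non-zero x in F_p, so an "is zero" indicator costs only O(log p) multiplications via square-and-multiply, far below a generic interpolation bound that grows with p. The following plaintext-only Python sketch illustrates the relation itself; it is a generic illustration, not one of the paper's constructions:

```python
# Over F_p, Fermat's little theorem: x^(p-1) == 1 for x != 0, so the
# "is zero" indicator needs only O(log p) multiplications mod p
# (square-and-multiply), instead of evaluating a generic degree-(p-1)
# interpolation polynomial.

def is_zero_indicator(x: int, p: int) -> int:
    """Return 1 if x == 0 (mod p), else 0, using only multiplications mod p."""
    return (1 - pow(x, p - 1, p)) % p

p = 65537  # a prime plaintext modulus commonly used with BGV/BFV
assert is_zero_indicator(0, p) == 1
assert is_zero_indicator(42, p) == 0
assert is_zero_indicator(p + 3, p) == 0  # reduction mod p is implicit
```

Homomorphically, the same exponentiation is a log-depth circuit of ciphertext multiplications, which is what makes this relation useful.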
Towards Efficient Arithmetic for Homomorphic Encryption Based on Ring-LWE
Fully homomorphic encryption is a kind of encryption that makes it possible to manipulate encrypted data directly through their ciphertexts. In this way, sensitive data can be processed without being decrypted beforehand, thereby ensuring their confidentiality. In the era of digital services and cloud computing, this kind of encryption has the potential to considerably enhance privacy protection. However, because of its recent discovery by Gentry in 2009, we do not yet have much hindsight about it. Several uncertainties therefore remain, in particular concerning its security and efficiency in practice, and should be clarified before any widespread use. This thesis addresses this issue and focuses on improving the performance of this kind of encryption in practice. To this end, we were interested in optimizing the arithmetic used by these schemes: both the arithmetic underlying the Ring Learning With Errors problem on which their security is based, and the arithmetic specific to the computations required by the procedures of some of these schemes. We also considered optimizing the computations required by specific applications of homomorphic encryption, in particular the classification of private data, and we propose innovative methods and techniques to perform these computations efficiently. We illustrate the efficiency of our different methods through software implementations and comparisons with the related art.
Revisiting Homomorphic Encryption Schemes for Finite Fields
The Brakerski-Gentry-Vaikuntanathan (BGV) and Brakerski/Fan-Vercauteren (BFV) schemes are the two main homomorphic encryption (HE) schemes for performing exact computations over finite fields and integers. Although the schemes work with the same plaintext space, there are significant differences in their noise management, algorithms for the core homomorphic multiplication operation, message encoding, and practical usability. The main goal of our work is to revisit both schemes, focusing on closing the gap between them by improving their noise growth, the computational complexity of the core algorithms, and usability. The other goal of our work is to provide both a theoretical and an experimental performance comparison of BGV and BFV.
More precisely, we propose an improved variant of BFV where the encryption operation is modified to significantly reduce the noise growth, which makes the BFV noise growth somewhat better than for BGV (in contrast to prior results showing that BGV has smaller noise growth for larger plaintext moduli). We also modify the homomorphic multiplication procedure, which is the main bottleneck in BFV, to reduce its algorithmic complexity. Our work introduces several other novel optimizations, including lazy scaling in BFV homomorphic multiplication and an improved BFV decryption procedure in the Residue Number System (RNS) representation. We also develop a usable variant of BGV as a more efficient alternative to BFV for common practical scenarios.
We implement our improved variants of BFV and BGV in PALISADE and evaluate their experimental performance for several benchmark computations. The experimental results suggest that our BGV implementation is faster for intermediate and large plaintext moduli, which are often used in practical scenarios with ciphertext packing, while our BFV implementation is faster for small plaintext moduli.
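The noise-growth comparisons above revolve around BFV's scale-and-round plaintext encoding: a message m in Z_t is carried as Δ·m plus noise modulo q, with Δ = ⌊q/t⌋, and decryption recovers m by scaling by t/q and rounding, which succeeds while the noise stays well below Δ/2. A toy, non-cryptographic sketch of just this mechanism (parameters are illustrative, not taken from the paper):

```python
# Toy sketch of BFV's Delta-scaling encode/decode. No encryption here:
# only the scale-and-round arithmetic that governs noise tolerance.

q, t = 1 << 60, 65537          # ciphertext and plaintext moduli (illustrative)
delta = q // t                 # BFV scaling factor: Delta = floor(q/t)

def encode(m: int, noise: int) -> int:
    """A ciphertext component carries Delta*m plus accumulated noise, mod q."""
    return (delta * m + noise) % q

def decode(x: int) -> int:
    """BFV-style decryption: round(t*x/q) mod t, in exact integer arithmetic."""
    return ((2 * t * x + q) // (2 * q)) % t

# Decryption is correct as long as the noise stays well below Delta/2.
assert decode(encode(123, 7)) == 123
assert decode(encode(500, delta // 3)) == 500
```

Note the rounding is done as ⌊(2tx + q) / 2q⌋ in exact integer arithmetic rather than with floats, since t·x exceeds double precision for these sizes.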
Tight bound on NewHope failure probability
The NewHope Key Encapsulation Mechanism (KEM) was presented at USENIX Security 2016 by Alkim et al. and is one of the remaining lattice-based candidates in the post-quantum standardization process initiated by NIST. However, despite the relative simplicity of the protocol, the bound on the decapsulation failure probability resulting from the original analysis is not tight.
In this work, we refine this analysis to obtain a tight upper bound on this probability, which happens to be much lower than what was originally evaluated. As a consequence, we propose a set of alternative parameters, increasing the security and the compactness of the scheme.
However, using a smaller modulus prevents the use of a full NTT algorithm to perform multiplications of elements in dimension 512 or 1024. Nonetheless, similarly to previous works, we combine different multiplication algorithms and show that our new parameters are competitive in a constant-time vectorized implementation. Our most compact parameters bring a 17% (resp. 11%) speed-up in performance, reduce the bandwidth requirements by more than 19%, and increase the security by 10% (resp. 7%) in dimension 512 (resp. 1024).
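Tightening a failure bound of this kind typically means computing the distribution of the decryption noise exactly, by convolving the distributions of its independent components, instead of applying a loose worst-case or Gaussian-tail estimate. The sketch below illustrates that principle on NewHope's centered binomial distribution ψ_k; it is an illustration of the exact-convolution technique, not the actual NewHope noise expression (which involves products of ring elements):

```python
from math import comb

def centered_binomial(k: int) -> dict[int, float]:
    """PMF of the centered binomial psi_k: (sum of k bits) - (sum of k bits).
    By Vandermonde's identity, P(v) = C(2k, k+v) / 4^k for -k <= v <= k."""
    return {v: comb(2 * k, k + v) / 4**k for v in range(-k, k + 1)}

def convolve(p1: dict, p2: dict) -> dict:
    """Exact distribution of the sum of two independent variables."""
    out: dict[int, float] = {}
    for a, pa in p1.items():
        for b, pb in p2.items():
            out[a + b] = out.get(a + b, 0.0) + pa * pb
    return out

def tail(pmf: dict, bound: int) -> float:
    """P(|X| > bound), read off the exact distribution."""
    return sum(p for v, p in pmf.items() if abs(v) > bound)

# Exact distribution of a sum of 16 independent psi_8 samples
# (variance 16 * 4 = 64, i.e. sigma = 8).
dist = centered_binomial(8)
s = dist
for _ in range(15):
    s = convolve(s, dist)

# An 8-sigma tail: the exact probability is astronomically small,
# whereas a crude worst-case argument would give nothing useful here.
assert tail(s, 64) < 1e-6
assert abs(sum(s.values()) - 1.0) < 1e-9  # sanity: still a distribution
```

The failure event in the real scheme is a noise coefficient exceeding roughly q/4; plugging the exact distribution into that threshold is what makes the refined bound tight.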
A Full RNS Variant of FV like Somewhat Homomorphic Encryption Schemes
Since Gentry's breakthrough work in 2009, homomorphic cryptography has received widespread attention. Implementing a fully homomorphic cryptographic scheme is, however, still highly expensive. Somewhat Homomorphic Encryption (SHE) schemes, on the other hand, allow only a limited number of arithmetic operations in the encrypted domain, but are more practical. Many SHE schemes have been proposed, among which the most competitive ones rely on (Ring-)Learning With Errors (RLWE), where operations occur on high-degree polynomials with large coefficients. This work focuses in particular on the Chinese Remainder Theorem representation (a.k.a. Residue Number Systems) applied to large coefficients. In SHE schemes like that of Fan and Vercauteren (FV), such a representation is hardly compatible with the coefficient-wise division and rounding required in decryption and homomorphic multiplication. This paper suggests a way to entirely eliminate the need for multi-precision arithmetic, and presents techniques enabling a full RNS implementation of FV-like schemes. For the dimensions considered, we report speed-ups for both decryption and homomorphic multiplication.
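The appeal and the difficulty of RNS both show up in a few lines: addition and multiplication act independently per channel, but there is no cheap channel-wise way to divide and round, which is exactly the obstacle this paper removes for FV. A minimal sketch with illustrative word-size moduli:

```python
from math import prod

moduli = (1049, 1051, 1061)  # pairwise-coprime channels (illustrative)
M = prod(moduli)             # the RNS represents integers mod M

def to_rns(x: int) -> tuple:
    return tuple(x % m for m in moduli)

def rns_mul(a: tuple, b: tuple) -> tuple:
    # Multiplication is independent per channel: no carries cross channels.
    return tuple((x * y) % m for x, y, m in zip(a, b, moduli))

def from_rns(r: tuple) -> int:
    """CRT reconstruction: x = sum_i r_i * M_i * (M_i^-1 mod m_i) mod M."""
    x = 0
    for ri, mi in zip(r, moduli):
        Mi = M // mi
        x += ri * Mi * pow(Mi, -1, mi)
    return x % M

a, b = 123456, 789
assert from_rns(rns_mul(to_rns(a), to_rns(b))) == (a * b) % M
# But note: floor(x / 2), say, has no channel-wise formula -- the residues
# of x tell you nothing directly about the residues of floor(x / 2).
```

In an FV implementation the coefficients are hundreds of bits wide, so staying in RNS (and never reconstructing) is what avoids multi-precision arithmetic.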
Note on the noise growth of the RNS variants of the BFV scheme
In a recent work, Al Badawi et al. noticed that the noise growth of the two RNS variants of BFV, from Bajard et al. and Halevi et al., behaves differently in practice. Their experiments, based on the PALISADE and SEAL libraries, showed that the multiplicative depth reached in practice by the first variant was considerably smaller than that of the second, although the two are theoretically equivalent in the worst case. Their interpretation of this phenomenon was that the approximations used by Bajard et al. made the expansion factor behave differently from what the Central Limit Theorem would predict.
We have realized that this difference actually comes from the implementation of the SmMRq procedure of Bajard et al. in SEAL and PALISADE, which is slightly different from what Bajard et al. had proposed. In this note, we show that once this small difference is fixed, the multiplicative depth of both variants is in fact the same in practice.
An HPR variant of the FV scheme: Computationally Cheaper, Asymptotically Faster
State-of-the-art implementations of homomorphic encryption exploit the Fan and Vercauteren (FV) scheme and the Residue Number System (RNS). While the RNS breaks down large-integer arithmetic into smaller independent channels, its non-positional nature makes operations such as division and rounding hard to implement, and makes the representation of small values inefficient. In this work, we propose applying the Hybrid Position-Residues Number System representation to the FV scheme. This is a positional representation of large radix in which the digits are represented in RNS. It inherits the benefits of RNS and makes it possible to accelerate the critical division and rounding operations, while also making the representation of smaller values more compact. This directly benefits the decryption and homomorphic multiplication procedures, reducing their asymptotic complexity in the ring dimension. This has also resulted in noticeable speed-ups when experimentally compared with RNS implementations from the related art.
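The hybrid layout is easy to picture: a number is cut into base-B digits (the positional layer), and each digit is stored as an RNS residue vector. Division with rounding by the radix then becomes a digit shift, which is precisely the operation a pure RNS cannot do cheaply. A sketch with illustrative parameters (radix, moduli, and function names are all assumptions for illustration, not the paper's exact choices):

```python
from math import prod

B = 1 << 32                     # radix of the positional layer (illustrative)
moduli = (65521, 65519, 65537)  # word-size primes; product M > B
M = prod(moduli)

def to_hpr(x: int, ndigits: int) -> list:
    """Positional base-B digits, each stored as an RNS residue vector."""
    digits = []
    for _ in range(ndigits):
        d = x % B
        digits.append(tuple(d % m for m in moduli))
        x //= B
    return digits

def hpr_div_B(digits: list) -> list:
    """floor(x / B): in HPR this is a one-digit shift -- the operation that
    is expensive in a pure, non-positional RNS representation."""
    return digits[1:] + [tuple(0 for _ in moduli)]

def digit_from_rns(r: tuple) -> int:
    x = 0
    for ri, mi in zip(r, moduli):
        Mi = M // mi
        x += ri * Mi * pow(Mi, -1, mi)
    return x % M

def from_hpr(digits: list) -> int:
    return sum(digit_from_rns(d) * B**i for i, d in enumerate(digits))

x = 0xDEADBEEFCAFEBABE
assert from_hpr(to_hpr(x, 3)) == x
assert from_hpr(hpr_div_B(to_hpr(x, 3))) == x // B
```

Small values also stay compact: they occupy only the low digits, whereas a single monolithic RNS spreads every value across all channels.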
Efficient reductions in cyclotomic rings - Application to R-LWE based FHE schemes
With Fully Homomorphic Encryption (FHE), it is possible to process encrypted data without having an access to the private-key. This has a
wide range of applications, most notably the offloading of
sensitive data processing. Most research on FHE has focused on the
improvement of its efficiency, namely by introducing schemes based on
the Ring-Learning With Errors (R-LWE) problem, and techniques such as batching, which allows for the encryption of
multiple messages in the same ciphertext. Much of the related research has focused on RLWE relying on power-of-two cyclotomic polynomials. While it is possible to achieve efficient arithmetic with such polynomials, one cannot exploit batching. Herein, the efficiency of ring arithmetic underpinned by non-power-of-two cyclomotic polynomials is analysed and improved. Two methods for polynomial reduction are proposed, one based on the Barrett reduction and
the other on a Montgomery representation. Speed-ups up to 2.66 are obtained for the reduction operation using an i7-5960X processor when compared with a straightforward implementation of the Barrett reduction. Moreover, the proposed methods are exploited to enhance homomorphic multiplication of FV and BGV encryption schemes, producing experimental speed-ups up to 1.37
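The classical integer Barrett reduction that the polynomial method generalizes replaces a runtime division by q with a multiplication by a precomputed constant and a shift. A minimal sketch of that integer version (the paper's contribution is adapting this idea to reduction modulo cyclotomic polynomials, not this textbook form):

```python
# Barrett reduction: precompute mu = floor(4^k / q); then for x < 4^k,
# t = floor(x * mu / 4^k) approximates floor(x / q) to within 1, so a
# single conditional subtraction finishes the reduction.

def barrett_setup(q: int, k: int) -> int:
    return (1 << (2 * k)) // q   # mu, computed once

def barrett_reduce(x: int, q: int, k: int, mu: int) -> int:
    """Compute x mod q for 0 <= x < 4^k with no division at runtime."""
    t = (x * mu) >> (2 * k)      # approximate quotient, off by at most 1
    r = x - t * q                # r lies in [0, 2q)
    return r - q if r >= q else r

q = 65537
k = 17                           # q < 2^17, so inputs up to 4^17 are handled
mu = barrett_setup(q, k)
for x in (0, 1, q - 1, q, 123456789, (1 << 34) - 1):
    assert barrett_reduce(x, q, k, mu) == x % q
```

For polynomials the same shape appears with multiplications by precomputed polynomials and degree truncations playing the roles of `mu` and the shifts.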
The zCOSMOS 10k-Bright Spectroscopic Sample
We present spectroscopic redshifts of a large sample of galaxies with I_(AB) < 22.5 in the COSMOS field, measured from spectra of 10,644 objects that were obtained in the first two years of observations of the zCOSMOS-bright redshift survey. These include a statistically complete subset of 10,109 objects. The average accuracy of individual redshifts is 110 km s^(–1), independent of redshift. The reliability of individual redshifts is described by a Confidence Class that has been empirically calibrated through repeat spectroscopic observations of over 600 galaxies. There is very good agreement between spectroscopic and photometric redshifts for the most secure Confidence Classes. For the less secure Confidence Classes, there is a good correspondence between the fraction of objects with a consistent photometric redshift and the spectroscopic repeatability, suggesting that the photometric redshifts can be used to indicate which of the less secure spectroscopic redshifts are likely right and which are probably wrong, and to give an indication of the nature of objects for which we failed to determine a redshift. Using this approach, we can construct a spectroscopic sample that is 99% reliable and that is 88% complete in the sample as a whole, and 95% complete in the redshift range 0.5 < z < 0.8. The luminosity and mass completeness levels of the zCOSMOS-bright sample of galaxies are also discussed.
Lyman-alpha Forest Tomography from Background Galaxies: The First Megaparsec-Resolution Large-Scale Structure Map at z>2
We present the first observations of foreground Lyα forest absorption from high-redshift galaxies, targeting 24 star-forming galaxies (SFGs) within a region of the COSMOS field. The transverse sightline separation, in comoving units, allows us to create a tomographic reconstruction of the 3D Lyα forest absorption field over the targeted redshift range. The resulting map is the first high-fidelity map of large-scale structure on megaparsec scales at z > 2. Our map reveals significant extended structures, including several spanning the entire transverse breadth, providing qualitative evidence for the filamentary structures predicted to exist in the high-redshift cosmic web. Simulated reconstructions with the same sightline sampling, spectral resolution, and signal-to-noise ratio recover the salient structures present in the underlying 3D absorption fields. Using data from other surveys, we identified 18 galaxies with known redshifts coeval with our map volume, enabling a direct comparison to our tomographic map. This shows that galaxies preferentially occupy high-density regions, in qualitative agreement with the same comparison applied to simulations. Our results establish the feasibility of the CLAMATO survey, which aims to obtain Lyα forest spectra for SFGs over a larger area of the COSMOS field, in order to map out IGM large-scale structure at these redshifts over a large volume.
Comment: Accepted for publication in Astrophysical Journal Letters; 8 pages and 5 figures